Measuring the prediction error. A comparison of cross-validation, bootstrap and covariance penalty methods
Authors
Abstract
The most widely used estimators of the prediction error of a non-linear regression model are examined. An extensive simulation study allowed comparison of the performance of these estimators across different non-parametric methods and under varying signal-to-noise ratio and sample size. Estimators based on resampling, such as leave-one-out, parametric and non-parametric bootstrap, repeated cross-validation and hold-out, were considered. The regression methods used are regression trees, projection pursuit regression and neural networks. The repeated corrected 10-fold cross-validation estimator and the parametric bootstrap estimator achieved the best performance in the simulations. © 2010 Elsevier B.V. All rights reserved.
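The repeated K-fold estimator singled out above averages ordinary K-fold estimates over several random partitions of the data. Below is a minimal sketch of that idea for squared-error loss; the helper names, the cubic-polynomial model and the synthetic data are ours, and the bias adjustment of the "corrected" variant studied in the paper is omitted:

```python
import numpy as np

def repeated_kfold_mse(x, y, fit, predict, k=10, repeats=5, seed=0):
    """Repeated K-fold cross-validation estimate of the mean squared
    prediction error (illustrative sketch; names are our own)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    estimates = []
    for _ in range(repeats):
        idx = rng.permutation(n)              # fresh random partition
        folds = np.array_split(idx, k)
        sq_errors = np.empty(n)
        for fold in folds:
            train = np.setdiff1d(idx, fold)   # all points outside the fold
            model = fit(x[train], y[train])
            sq_errors[fold] = (y[fold] - predict(model, x[fold])) ** 2
        estimates.append(sq_errors.mean())
    return float(np.mean(estimates))          # average over repetitions

# Example: cubic polynomial fit to a noisy sine curve
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)
err = repeated_kfold_mse(x, y,
                         fit=lambda xs, ys: np.polyfit(xs, ys, 3),
                         predict=np.polyval)
print(round(err, 3))
```

Averaging over several random partitions reduces the variance that any single K-fold split introduces, which is the motivation for the repeated variant.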
Related papers
The Estimation of Prediction Error: Covariance Penalties and Cross-Validation
Having constructed a data-based estimation rule, perhaps a logistic regression or a classification tree, the statistician would like to know its performance as a predictor of future cases. There are two main theories concerning prediction error: (1) penalty methods such as Cp, Akaike’s information criterion, and Stein’s unbiased risk estimate that depend on the covariance between data points an...
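For a linear smoother yhat = H @ y, the covariance term behind such penalties reduces to sigma^2 * trace(H), giving the familiar Cp-type correction to the training error. A hedged sketch on simulated data (the design, coefficients and noise level below are ours):

```python
import numpy as np

# Covariance-penalty (Cp-type) estimate for a linear smoother yhat = H @ y:
# prediction error ~ training error + (2 * sigma^2 / n) * trace(H).
rng = np.random.default_rng(0)
n, sigma = 100, 0.5
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
beta = np.array([1.0, 2.0, 0.0, -1.0])
y = X @ beta + rng.normal(scale=sigma, size=n)

H = X @ np.linalg.inv(X.T @ X) @ X.T        # hat matrix of least squares
yhat = H @ y
train_err = np.mean((y - yhat) ** 2)        # optimistic apparent error
penalty = 2 * sigma**2 * np.trace(H) / n    # trace(H) = number of parameters
cp_estimate = train_err + penalty
print(round(cp_estimate, 3))
```

The penalty grows with the effective degrees of freedom trace(H), charging more complex fits for their extra optimism.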
Assessing Prediction Error of Nonparametric Regression and Classification under Bregman Divergence
Prediction error is critical to assessing the performance of statistical methods and selecting statistical models. We propose the cross-validation and approximated cross-validation methods for estimating prediction error under a broad q-class of Bregman divergence for error measures which embeds nearly all of the commonly used loss functions in regression, classification procedures and machine ...
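The squared-error and deviance-type losses mentioned here are both special cases of a Bregman divergence, D_phi(y, mu) = phi(y) - phi(mu) - phi'(mu) * (y - mu). A small illustration (function names are ours):

```python
import numpy as np

def bregman(phi, grad_phi, y, mu):
    """Bregman divergence D_phi(y, mu) = phi(y) - phi(mu) - phi'(mu)*(y - mu)."""
    return phi(y) - phi(mu) - grad_phi(mu) * (y - mu)

# phi(u) = u^2 recovers squared error: D(y, mu) = (y - mu)^2
sq = bregman(lambda u: u**2, lambda u: 2.0 * u, 3.0, 1.0)   # (3 - 1)^2 = 4
print(sq)

# phi(u) = u*log(u) + (1-u)*log(1-u) gives a binomial deviance-type
# divergence suited to probability-valued predictions
def phi_ber(u):
    return u * np.log(u) + (1 - u) * np.log(1 - u)

def grad_ber(u):
    return np.log(u / (1 - u))

dev = bregman(phi_ber, grad_ber, 0.9, 0.5)
```

Any strictly convex phi yields a valid divergence, which is why one cross-validation theory can cover many loss functions at once.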
A comparison of bootstrap methods and an adjusted bootstrap approach for estimating prediction error in microarray classification Short title: Bootstrap Prediction Error Estimation
SUMMARY This paper first provides a critical review of some existing methods for estimating prediction error in classifying microarray data, where the number of genes greatly exceeds the number of specimens. Special attention is given to the bootstrap-related methods. When the sample size n is small, we find that all the reviewed methods suffer from either substantial bias or variability. We intr...
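One widely used bootstrap adjustment in this setting is Efron's .632 estimator, which blends the optimistic apparent error with the pessimistic leave-one-out bootstrap error. A toy sketch with a nearest-centroid classifier (the helper names and data below are ours, not from the paper):

```python
import numpy as np

def fit_centroids(X, y):
    # Toy classifier: one centroid per class (hypothetical helper)
    return np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict_centroids(model, X):
    d = np.linalg.norm(X[:, None, :] - model[None, :, :], axis=2)
    return d.argmin(axis=1)                      # nearest centroid wins

def bootstrap_632_error(X, y, B=50, seed=0):
    """Efron's .632 estimate: 0.368 * apparent error + 0.632 * leave-one-out
    bootstrap error (each point judged only by resamples that exclude it)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    app = np.mean(predict_centroids(fit_centroids(X, y), X) != y)
    errs, counts = np.zeros(n), np.zeros(n)
    for _ in range(B):
        idx = rng.integers(0, n, n)              # bootstrap sample with replacement
        out = np.setdiff1d(np.arange(n), idx)    # points left out of the resample
        if out.size == 0:
            continue
        model = fit_centroids(X[idx], y[idx])
        errs[out] += predict_centroids(model, X[out]) != y[out]
        counts[out] += 1
    loo = np.mean(errs[counts > 0] / counts[counts > 0])
    return 0.368 * app + 0.632 * loo

# Two Gaussian clusters, 40 points per class
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(2, 1, (40, 2))])
y = np.repeat([0, 1], 40)
est = bootstrap_632_error(X, y)
print(round(est, 3))
```

The 0.632 weight reflects the expected fraction of distinct points in a bootstrap sample; adjusted variants such as .632+ temper this weighting when the classifier overfits badly.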
Prediction error estimation: a comparison of resampling methods
MOTIVATION In genomic studies, thousands of features are collected on relatively few samples. One of the goals of these studies is to build classifiers to predict the outcome of future observations. There are three inherent steps to this process: feature selection, model selection and prediction assessment. With a focus on prediction assessment, we compare several methods for estimating the 'tr...
Bootstrap-based Penalty Choice for the Lasso, Achieving Oracle Performance
In theory, if penalty parameters are chosen appropriately then the lasso can eliminate unnecessary variables in prediction problems, and improve the performance of predictors based on the variables that remain. However, standard methods for tuning-parameter choice, for example techniques based on the bootstrap or cross-validation, are not sufficiently accurate to achieve this level of precision...
Journal: Computational Statistics & Data Analysis
Volume: 54, Issue: -
Pages: -
Published: 2010